
    On the stable recovery of the sparsest overcomplete representations in presence of noise

    Let x be a signal to be sparsely decomposed over a redundant dictionary A, i.e., a sparse coefficient vector s has to be found such that x = As. It is known that this problem is inherently unstable against noise, and to overcome this instability, the authors of [Stable Recovery; Donoho et al., 2006] proposed to use an "approximate" decomposition, that is, a decomposition satisfying ||x - As|| < delta, rather than the exact equality x = As. They then showed that if there is a decomposition with ||s||_0 < (1 + M^{-1})/2, where M denotes the coherence of the dictionary, this decomposition is stable against noise. On the other hand, it is known that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its stability against noise had been proved only for the far more restrictive decompositions satisfying ||s||_0 < (1 + M^{-1})/2, because usually (1 + M^{-1})/2 << spark(A)/2. This limitation may not have been very important before, because ||s||_0 < (1 + M^{-1})/2 is also the bound that guarantees the sparse decomposition can be found by minimizing the L1 norm, the classic approach to sparse decomposition. However, with the availability of new algorithms for sparse decomposition, namely SL0 and Robust-SL0, it is important to know whether or not unique sparse decompositions with (1 + M^{-1})/2 < ||s||_0 < spark(A)/2 are stable. In this paper, we show that such decompositions are indeed stable. In other words, we extend the stability bound from ||s||_0 < (1 + M^{-1})/2 to the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all unique sparse decompositions are stably recoverable". Moreover, we see that sparser decompositions are "more stable".
    Comment: Accepted in IEEE Transactions on Signal Processing on 4 May 2010. (c) 2010 IEEE.
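    A quick numerical illustration of how large the gap between the two bounds can be (a minimal sketch with our own variable names, assuming a random Gaussian dictionary, for which spark(A) = m + 1 with probability one):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 64, 128                      # underdetermined: m equations, n unknowns
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)      # unit-norm atoms (columns)

    # Mutual coherence M: largest absolute inner product between distinct atoms.
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    M = G.max()

    # Classic coherence-based stability bound vs. the uniqueness bound.
    coherence_bound = (1.0 + 1.0 / M) / 2.0
    uniqueness_bound = (m + 1) / 2.0    # spark(A)/2 for a generic Gaussian A

    print(f"coherence M          = {M:.3f}")
    print(f"(1 + 1/M)/2          = {coherence_bound:.2f}")
    print(f"spark(A)/2 = (m+1)/2 = {uniqueness_bound:.1f}")
    ```

    For sizes like these the coherence bound typically permits only a handful of nonzeros, while the uniqueness bound allows sparsity levels up to (m+1)/2, which is the range the paper's stability result covers.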

    A fast approach for overcomplete sparse decomposition based on smoothed L0 norm

    In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Unlike previous methods, which usually solve this problem by minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the L0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
    Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes, see http://ee.sharif.ir/~SLzero. File replaced because Fig. 5 was erroneously missing.
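    For intuition, here is a minimal Python sketch of the smoothed-L0 idea (the reference MATLAB implementation is at the URL above; the function name and parameter values below are our own simplification):

    ```python
    import numpy as np

    def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
        """Sketch of SL0: approximate ||s||_0 by n - sum_i exp(-s_i^2 / (2 sigma^2))
        and minimize this smooth surrogate for a decreasing sequence of sigma,
        projecting back onto {s : A s = x} after every gradient step."""
        A_pinv = A.T @ np.linalg.inv(A @ A.T)   # pseudo-inverse of a full-rank fat A
        s = A_pinv @ x                          # minimum-L2-norm initial solution
        sigma = 2.0 * np.max(np.abs(s))
        while sigma > sigma_min:
            for _ in range(inner_iters):
                delta = s * np.exp(-s**2 / (2 * sigma**2))  # surrogate gradient dir.
                s = s - mu * delta                          # gradient step
                s = s - A_pinv @ (A @ s - x)                # project onto A s = x
            sigma *= sigma_decrease                         # sharpen the surrogate
        return s
    ```

    The outer loop is a graduated non-convexity scheme: for large sigma the surrogate is smooth and easy to minimize, and as sigma shrinks it approaches the L0 norm while each stage is warm-started from the previous solution.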

    A First Step to Convolutive Sparse Representation

    In this paper, an extension of the sparse decomposition problem is considered, and an algorithm for solving it is presented. In this extension, it is known that one of the shifted versions of a signal s (not necessarily the original signal itself) has a sparse representation over an overcomplete dictionary, and we look for the sparsest representation among the representations of all shifted versions of s. The proposed algorithm then finds the required shift and the sparse representation simultaneously. Experimental results demonstrate the performance of our algorithm.
    Comment: 4 pages. In Proceedings of ICASSP 200
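    The paper's algorithm recovers the shift and the representation jointly; the brute-force sketch below (our own, with a hypothetical `solver` argument) only illustrates the problem being solved, by scoring every circular shift with a generic sparse decomposition routine:

    ```python
    import numpy as np

    def sparsest_shifted_representation(A, signal, solver, thresh=1e-3):
        """Decompose every circular shift of `signal` over dictionary A and keep
        the (shift, coefficients) pair with the fewest significant entries.
        `solver(A, x)` is any sparse decomposition routine, e.g. the SL0 sketch."""
        best = None
        for shift in range(len(signal)):
            x = np.roll(signal, shift)
            coef = solver(A, x)
            l0 = int(np.sum(np.abs(coef) > thresh))   # effective support size
            if best is None or l0 < best[0]:
                best = (l0, shift, coef)
        return best[1], best[2]                        # (shift, coefficients)
    ```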

    Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed Rank Function

    In this paper, the problem of matrix rank minimization under affine constraints is addressed. The state-of-the-art algorithms can recover matrices only with a rank much less than what is sufficient for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which in practice improves recovery limits on the rank of the solution. This approximation leads to a non-convex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function subject to the affine constraints is optimized. As the algorithm proceeds, finer approximations of the rank are optimized and the solver is initialized with the solution of the previous approximation, until the desired accuracy is reached. On the theoretical side, benefiting from the spherical section property, we show that the sequence of the solutions of the approximating functions converges to the minimum rank solution. On the experimental side, it is shown that the proposed algorithm, termed SRF (Smoothed Rank Function), can recover matrices that are unique solutions of the rank minimization problem and yet not recoverable by nuclear norm minimization. Furthermore, we demonstrate that, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than that of several well-known algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
    Comment: Accepted in IEEE TSP on December 4th, 201
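    A minimal sketch of the smoothed-rank idea for the matrix completion special case (our own rendition and parameter values, not the paper's exact algorithm): rank(X) is replaced by a Gaussian surrogate over the singular values, which is tightened gradually as in SL0.

    ```python
    import numpy as np

    def srf_complete(M_obs, mask, delta_min=1e-3, decay=0.5, mu=1.0, inner=5):
        """Sketch of the Smoothed Rank Function idea for matrix completion:
        rank(X) ~ n - sum_i exp(-sigma_i^2 / (2 delta^2)) over the singular
        values sigma_i.  The surrogate is decreased by gradient steps followed
        by projection onto the observed entries, while delta shrinks
        (graduated non-convexity)."""
        X = np.where(mask, M_obs, 0.0)            # feasible starting point
        delta = 2.0 * np.linalg.svd(X, compute_uv=False).max()
        while delta > delta_min:
            for _ in range(inner):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                g = s * np.exp(-s**2 / (2 * delta**2))   # surrogate gradient dir.
                X = X - mu * (U * g) @ Vt                # gradient step
                X = np.where(mask, M_obs, X)             # project onto constraints
            delta *= decay
        return X
    ```

    As with SL0, a large delta yields a smooth, nearly convex surrogate, and each finer approximation is warm-started from the previous solution.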

    Learning Overcomplete Dictionaries Based on Atom-by-Atom Updating

    A dictionary learning algorithm learns a set of atoms from some training signals in such a way that each signal can be approximated as a linear combination of only a few atoms. Most dictionary learning algorithms use a two-stage iterative procedure. The first stage is to sparsely approximate the training signals over the current dictionary. The second stage is the update of the dictionary. In this paper we develop atom-by-atom dictionary learning algorithms, which update the atoms sequentially. Specifically, we propose an efficient alternative to the well-known K-SVD algorithm, and show by various experiments that the proposed algorithm is much faster than K-SVD while giving better results. Moreover, we propose a novel algorithm that, instead of alternating between the two dictionary learning stages, performs only the second stage. While in K-SVD each atom is updated along with the nonzero entries of its associated row vector in the coefficient matrix (which we call its profile), in the new algorithm each atom is updated along with all the entries of its profile. As a result, contrary to K-SVD, the support of each profile can change while updating the dictionary. To further accelerate the convergence of this algorithm and to control the cardinality of the representations, we then propose its two-stage counterpart by adding the sparse approximation stage. Experimental results on recovery of a known synthetic dictionary and on dictionary learning for a class of auto-regressive signals demonstrate the promising performance of the proposed algorithms.
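    To make the atom-by-atom structure concrete, here is a sketch of one sequential update sweep in the spirit of approximate K-SVD (our own simplification; the paper's algorithms differ in detail, e.g. by allowing the support of each profile to change):

    ```python
    import numpy as np

    def update_atoms(D, C, X):
        """One sweep of sequential (atom-by-atom) dictionary updating: each atom
        d_k and the nonzero part of its profile (the k-th row of the coefficient
        matrix C) are refit against the residual that excludes atom k.
        Shapes: X (m, N) training signals, D (m, K) dictionary, C (K, N)."""
        for k in range(D.shape[1]):
            support = np.nonzero(C[k, :])[0]   # signals that currently use atom k
            if support.size == 0:
                continue
            # Residual of the supported signals without atom k's contribution.
            E = X[:, support] - D @ C[:, support] + np.outer(D[:, k], C[k, support])
            # Rank-1 refit: new atom direction, then its profile coefficients.
            d = E @ C[k, support]
            d /= np.linalg.norm(d)
            D[:, k] = d
            C[k, support] = d @ E
        return D, C
    ```

    Updating atoms one at a time against the current residual is what lets the sweep reuse each atom's freshly updated neighbors, which is part of why such schemes can converge faster than batch updates.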